Recently, a surge of high-quality 3D-aware GANs has been proposed, leveraging the generative power of neural rendering. It is natural to combine 3D GANs with GAN inversion methods that project a real image into the generator's latent space, allowing free-view consistent synthesis and editing, referred to as 3D GAN inversion. However, even with the facial prior preserved in pre-trained 3D GANs, reconstructing a 3D portrait from only one monocular image is still an ill-posed problem. A straightforward application of 2D GAN inversion methods focuses on texture similarity only while ignoring the correctness of the 3D geometry shape. It may cause geometry collapse, especially when reconstructing a side face under an extreme pose. Besides, the synthesized results in novel views are prone to be blurry. In this work, we propose a novel method to promote 3D GAN inversion by introducing a facial symmetry prior. We design a pipeline and constraints to make full use of the pseudo auxiliary view obtained via image flipping, which helps obtain a robust and reasonable geometry shape during the inversion process. To enhance texture fidelity in unobserved viewpoints, pseudo labels from depth-guided 3D warping provide extra supervision. We further design constraints that filter out conflicting areas from optimization in asymmetric situations. Comprehensive quantitative and qualitative evaluations on image reconstruction and editing demonstrate the superiority of our method.
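A minimal sketch of the symmetry idea described above, not the paper's implementation: the horizontally flipped input serves as a pseudo auxiliary view during latent optimization. The `generator(latent, pose)` callable, the mirrored camera pose, and the loss weight are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def symmetry_inversion_step(generator, latent, image, pose, mirrored_pose, w_sym=0.5):
    """One optimization step that supervises both the observed and the flipped view."""
    recon = generator(latent, pose)                  # render from the observed camera
    recon_mirror = generator(latent, mirrored_pose)  # render from the mirrored camera
    target_mirror = torch.flip(image, dims=[-1])     # pseudo auxiliary view via flipping

    # reconstruction loss on the real view plus a symmetry term on the flipped view
    return F.l1_loss(recon, image) + w_sym * F.l1_loss(recon_mirror, target_mirror)
```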
Dialogue state tracking (DST) aims to convert the dialogue history into dialogue states, which consist of slot-value pairs. As condensed structural information that memorizes all the history, the dialogue state of the last turn is typically adopted as the input for predicting the current state by DST models. However, these models tend to keep the predicted slot values unchanged, which we define as state momentum in this paper. Specifically, the models struggle to update slot values that need to be changed and to correct slot values that were wrongly predicted in the last turn. To this end, we propose MoNET to tackle state momentum via noise-enhanced training. First, the previous state of each turn in the training data is noised by replacing some of its slot values. Then, the noised previous state is used as the input to learn to predict the current state, improving the model's ability to update and correct slot values. Furthermore, a contrastive context matching framework is designed to narrow the representation distance between a state and its corresponding noised variant, which reduces the impact of noised states and helps the model better understand the dialogue history. Experimental results on MultiWOZ datasets show that MoNET outperforms previous DST methods. Ablations and analysis verify the effectiveness of MoNET in alleviating state momentum and improving anti-noise ability.
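A minimal sketch of the noising step described above: some slot values of the previous state are replaced before the state is fed back as input, so the model must learn to restore them. Slot names, the candidate value pool, and the noise ratio are illustrative assumptions.

```python
import random

def noise_previous_state(prev_state, candidate_values, noise_ratio=0.2, seed=None):
    """Replace a fraction of slot values in the previous dialogue state with alternatives."""
    rng = random.Random(seed)
    noised = dict(prev_state)
    for slot, value in prev_state.items():
        if rng.random() < noise_ratio:
            alternatives = [v for v in candidate_values.get(slot, []) if v != value]
            if alternatives:
                noised[slot] = rng.choice(alternatives)
    return noised

# Example: the model must learn to correct "hotel-area" back while keeping other slots.
prev_state = {"hotel-area": "centre", "hotel-stars": "4"}
candidates = {"hotel-area": ["centre", "north", "south"], "hotel-stars": ["3", "4", "5"]}
print(noise_previous_state(prev_state, candidates, noise_ratio=1.0, seed=0))
```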
CNN-based surrogates have become prevalent in scientific applications as replacements for conventional, time-consuming physical approaches. Although these surrogates can yield satisfactory results at significantly lower computation cost on small training datasets, our benchmarking results show that data-loading overhead becomes the major performance bottleneck when training surrogates on large datasets. In practice, surrogates are usually trained with high-resolution scientific data, which can easily reach the terabyte scale. Several state-of-the-art data loaders have been proposed to improve loading throughput in general CNN training; however, they are sub-optimal when applied to surrogate training. In this work, we propose SOLAR, a surrogate data loader that can ultimately increase loading throughput during training. It leverages our three key observations from the benchmarking and contains three novel designs. Specifically, SOLAR first generates a pre-determined shuffled index list and accordingly optimizes the global access order and the buffer eviction scheme to maximize data reuse and the buffer hit rate. It then trades off lightweight computational imbalance against heavyweight loading-workload imbalance to speed up the overall training. It finally optimizes its data access pattern with HDF5 to achieve better parallel I/O throughput. Our evaluation with three scientific surrogates and 32 GPUs illustrates that SOLAR can achieve up to a 24.4X speedup over the PyTorch Data Loader and a 3.52X speedup over state-of-the-art data loaders.
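A rough sketch of the pre-determined shuffled index list and buffer reuse idea, not SOLAR's actual design: because every epoch's access order is fixed ahead of time, a loader can keep in memory the samples that will be requested again soon. The LRU policy, buffer size, and `read_fn` are illustrative assumptions.

```python
import random
from collections import OrderedDict

def make_epoch_orders(num_samples, num_epochs, seed=0):
    """Generate all epochs' shuffled orders up front so access can be optimized globally."""
    rng = random.Random(seed)
    orders = []
    for _ in range(num_epochs):
        order = list(range(num_samples))
        rng.shuffle(order)
        orders.append(order)
    return orders

class ReuseBuffer:
    """Tiny LRU-style buffer that counts hits to illustrate data reuse across epochs."""
    def __init__(self, capacity, read_fn):
        self.capacity, self.read_fn = capacity, read_fn
        self.cache, self.hits, self.misses = OrderedDict(), 0, 0

    def get(self, idx):
        if idx in self.cache:
            self.hits += 1
            self.cache.move_to_end(idx)
            return self.cache[idx]
        self.misses += 1
        sample = self.read_fn(idx)          # expensive read from storage (e.g., HDF5)
        self.cache[idx] = sample
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)  # evict the least recently used entry
        return sample
```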
Safe interaction with other traffic participants is one of the core requirements for autonomous driving, especially at intersections and in occluded scenarios. Most existing methods are designed for specific scenarios and require extensive manual parameter tuning to be applied to different situations. To address this problem, we first propose a learning-based Interaction Point Model (IPM), which describes the interaction between agents in a unified manner with protection time and interaction priority. We further integrate the proposed IPM into a novel planning framework, demonstrating its effectiveness and robustness through comprehensive simulations in highly dynamic environments.
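One hypothetical reading of the "interaction point" description above, purely for illustration and not the paper's definition: a small record per agent pair describing when each agent reaches the shared point, the required protection time, and which agent has priority. All field names and the yielding rule below are assumptions.

```python
from dataclasses import dataclass

@dataclass
class InteractionPoint:
    ego_arrival_time: float     # seconds until the ego vehicle reaches the point
    other_arrival_time: float   # seconds until the other agent reaches the point
    protection_time: float      # minimum required time gap between the two arrivals
    ego_has_priority: bool

    def ego_must_yield(self) -> bool:
        # Yield when the arrival gap is smaller than the protection time
        # and the ego vehicle does not hold the interaction priority.
        gap = abs(self.ego_arrival_time - self.other_arrival_time)
        return gap < self.protection_time and not self.ego_has_priority
```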
While Transformers have achieved great success in paraphrase generation, they treat sentences as linear sequences of tokens and often ignore their hierarchical information. Prior work has shown that decomposing the input tokens into levels of granularity (e.g., words, phrases, or sentences) yields substantial improvements, suggesting that Transformers can be enhanced by more fine-grained granularity modeling. In this work, we propose Continuous Decomposition of Granularity for Neural Paraphrase Generation (C-DNPG). To effectively incorporate granularity into sentence encoding, C-DNPG introduces a granularity-aware attention (GA-attention) mechanism that extends multi-head self-attention with: 1) a granularity head that automatically infers the hierarchical structure of a sentence by neurally estimating the granularity level of each input token; and 2) two novel attention masks, namely granularity resonance and granularity scope, to efficiently encode granularity into attention. Experiments on two benchmarks, including Quora Question Pairs and Twitter URLs, show that C-DNPG outperforms baseline models by a remarkable margin on many metrics. Qualitative analysis shows that C-DNPG indeed captures fine-grained granularity levels effectively.
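A rough illustration of a "granularity head", only a loose interpretation of the mechanism described above (the paper's resonance and scope masks are defined differently): a per-token scalar granularity level is estimated and turned into an additive attention bias that favors tokens at a similar level.

```python
import torch
import torch.nn as nn

class GranularityBias(nn.Module):
    """Estimate a per-token granularity level and bias attention toward similar levels."""
    def __init__(self, d_model):
        super().__init__()
        self.level = nn.Linear(d_model, 1)

    def forward(self, hidden):                   # hidden: (batch, seq, d_model)
        g = torch.sigmoid(self.level(hidden))    # (batch, seq, 1), granularity in [0, 1]
        diff = (g - g.transpose(1, 2)).abs()     # (batch, seq, seq) pairwise level gap
        return -diff                             # added to attention logits: tokens at
                                                 # similar levels attend to each other more
```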
Recognizing human actions from point cloud videos has attracted great attention from academia and industry due to its wide applications, such as autonomous driving and robotics. However, current point cloud action recognition methods typically require large amounts of data with manual annotations and complex backbone networks with high computation costs, which makes them impractical for real-world applications. Therefore, this paper considers the task of semi-supervised point cloud action recognition. We propose a Masked Pseudo-Labeling autoEncoder (MAPLE) framework to learn effective representations with fewer annotations for point cloud action recognition. In particular, we design a novel and efficient Decoupled Spatial-temporal TransFormer (DestFormer) as the backbone of MAPLE. In DestFormer, the spatial and temporal dimensions of the 4D point cloud video are decoupled to achieve efficient self-attention for learning both long-term and short-term features. Moreover, to learn discriminative features from fewer annotations, we design a masked pseudo-labeling autoencoder structure to guide DestFormer to reconstruct the features of masked frames from the available frames. More importantly, for unlabeled data, we exploit the pseudo-labels from the classification head as the supervision signal for reconstructing the features of masked frames. Finally, comprehensive experiments show that MAPLE achieves superior results on three public benchmarks and outperforms the state-of-the-art method by 8.08% accuracy on the MSR-Action3D dataset.
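A minimal sketch of masked-frame feature reconstruction, assuming placeholder `encoder` and `decoder` callables and an illustrative mask ratio; it only demonstrates the general masking idea, not MAPLE's exact architecture.

```python
import torch
import torch.nn.functional as F

def masked_reconstruction_loss(frame_features, encoder, decoder, mask_ratio=0.5):
    """frame_features: (batch, num_frames, feat_dim) per-frame features of a point cloud video."""
    b, t, d = frame_features.shape
    num_masked = int(t * mask_ratio)
    perm = torch.randperm(t)
    masked_idx, visible_idx = perm[:num_masked], perm[num_masked:]

    visible = frame_features[:, visible_idx]   # only visible frames are encoded
    context = encoder(visible)                 # (batch, num_visible, feat_dim)
    pred = decoder(context, num_masked)        # placeholder: predict masked-frame features
    target = frame_features[:, masked_idx]
    return F.mse_loss(pred, target)
```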
Current point cloud detection methods struggle to detect open-vocabulary objects in the real world due to their limited generalization ability. Moreover, collecting and fully annotating a point cloud detection dataset with many classes of objects is extremely laborious and expensive. To the best of our knowledge, we are the first to study the problem of open-vocabulary 3D point cloud detection. Instead of seeking a point cloud dataset with full labels, we resort to ImageNet1K to broaden the vocabulary of the point cloud detector. We propose OV-3DETIC, an Open-Vocabulary 3D DETector using Image-level Class supervision. Specifically, we take advantage of two modalities, the image modality for recognition and the point cloud modality for localization, to generate pseudo labels for unseen classes. Then we propose a novel cross-modal contrastive learning method to transfer the knowledge from the image modality to the point cloud modality during training. OV-3DETIC enables the point cloud detector to achieve open-vocabulary detection without compromising the latency during inference. Extensive experiments show that the proposed OV-3DETIC achieves at least 10.77% mAP improvement (absolute value) and 9.56% mAP improvement (absolute value) on the SUN-RGBD and ScanNet datasets, respectively. Furthermore, we conduct sufficient experiments to shed light on why the proposed OV-3DETIC works.
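A minimal sketch of cross-modal contrastive learning in the generic InfoNCE style: matched image and point cloud features are pulled together, mismatched pairs pushed apart. This illustrates the general technique rather than OV-3DETIC's exact formulation; the temperature value is an assumption.

```python
import torch
import torch.nn.functional as F

def cross_modal_contrastive_loss(img_feat, pc_feat, temperature=0.07):
    """img_feat, pc_feat: (batch, dim); row i of each tensor comes from the same object."""
    img_feat = F.normalize(img_feat, dim=-1)
    pc_feat = F.normalize(pc_feat, dim=-1)
    logits = img_feat @ pc_feat.t() / temperature   # (batch, batch) similarity matrix
    labels = torch.arange(img_feat.size(0), device=img_feat.device)
    # symmetric loss: image-to-point-cloud and point-cloud-to-image directions
    return 0.5 * (F.cross_entropy(logits, labels) + F.cross_entropy(logits.t(), labels))
```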
By ensuring differential privacy in the learning algorithms, one can rigorously mitigate the risk of large models memorizing sensitive training data. In this paper, we study two such algorithms, DP-SGD and DP-NSGD, which first clip or normalize the per-sample gradients to bound the sensitivity and then add noise to obfuscate the exact information. We analyze the convergence behavior of these two algorithms in the non-convex optimization setting under two common assumptions, and achieve a rate of $\mathcal{O}\left(\sqrt[4]{\frac{d\log(1/\delta)}{n^2\epsilon^2}}\right)$ for a $d$-dimensional model, $n$ samples, and $(\epsilon,\delta)$-DP, which improves over previous bounds under weaker assumptions. Specifically, we introduce a regularizing factor in DP-NSGD and show that it is crucial in the convergence proof and subtly controls the bias-noise trade-off. Our proof deliberately handles the per-sample gradient clipping and normalization specified for the private setting. Empirically, we demonstrate that these two algorithms achieve similar best accuracy, while DP-NSGD is comparatively easier to tune than DP-SGD, which may help further save the privacy budget when accounting for the tuning effort.
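A minimal sketch of the two per-sample gradient treatments discussed above, each followed by Gaussian noise. `per_sample_grads` is assumed to be a (batch, dim) tensor of flattened per-example gradients; the clip bound, regularizing factor, and noise scale are illustrative, not the paper's settings.

```python
import torch

def dp_sgd_update(per_sample_grads, clip_bound=1.0, noise_multiplier=1.0):
    """Clip each per-sample gradient to norm <= clip_bound, sum, add noise, average."""
    norms = per_sample_grads.norm(dim=1, keepdim=True)
    clipped = per_sample_grads * torch.clamp(clip_bound / (norms + 1e-12), max=1.0)
    summed = clipped.sum(dim=0)
    noise = noise_multiplier * clip_bound * torch.randn_like(summed)
    return (summed + noise) / per_sample_grads.size(0)

def dp_nsgd_update(per_sample_grads, regularizer=0.1, noise_multiplier=1.0):
    """Normalize each per-sample gradient with a regularizing factor in the denominator."""
    norms = per_sample_grads.norm(dim=1, keepdim=True)
    normalized = per_sample_grads / (norms + regularizer)
    summed = normalized.sum(dim=0)
    noise = noise_multiplier * torch.randn_like(summed)
    return (summed + noise) / per_sample_grads.size(0)
```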
Action recognition from videos, i.e., classifying a video into one of the pre-defined action types, has been a popular topic in the artificial intelligence, multimedia, and signal processing communities. However, existing methods usually consider an input video as a whole and learn models, e.g., Convolutional Neural Networks (CNNs), with coarse video-level class labels. These methods can only output an action class for the video, but cannot provide interpretable clues to answer why the video shows a specific action. Therefore, researchers have started to focus on a new task, Part-level Action Parsing (PAP), which aims not only to predict the video-level action but also to recognize the frame-level fine-grained actions or interactions of body parts for each person in the video. To this end, we propose a coarse-to-fine framework for this challenging task. In particular, our framework first predicts the video-level class of the input video, then localizes the body parts and predicts the part-level actions. Moreover, to balance accuracy and computation in part-level action parsing, we propose to recognize part-level actions from segment-level features. Furthermore, to overcome the ambiguity of body parts, we propose a pose-guided positional embedding method to accurately localize body parts. Through comprehensive experiments on a large-scale dataset, i.e., Kinetics-TPS, our framework achieves state-of-the-art performance and surpasses existing methods by more than 31.10% in ROC score.
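A minimal sketch of recognizing part-level actions from segment-level features, as mentioned above: frame features within a temporal segment are averaged before classification, trading some temporal resolution for efficiency. The classifier, shapes, and segment length are illustrative assumptions.

```python
import torch

def segment_level_logits(frame_features, classifier, segment_length=8):
    """frame_features: (num_frames, feat_dim) features for one tracked body part."""
    t, d = frame_features.shape
    num_segments = max(t // segment_length, 1)
    usable = frame_features[: num_segments * segment_length]
    segments = usable.reshape(num_segments, -1, d).mean(dim=1)  # (num_segments, feat_dim)
    return classifier(segments)                                 # one prediction per segment
```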
In the task of image aesthetic quality assessment, it is difficult to reach the high-score and low-score regions because of the normal distribution of aesthetic datasets. To reduce labeling errors and address the problem of normally distributed data, we propose a new aesthetic mixed dataset with classification and regression, named AMD-CR, and we train a meta-reweighting network to re-weight the losses of the training data differently. In addition, we provide a training strategy based on pseudo labels from a binary classification task at different stages, and then use it for aesthetic training involving different stages of classification and regression tasks. In the construction of the network structure, we build an Aesthetic Adaptive Block (AAB) structure that can adapt to any input image size. Besides, we use Efficient Channel Attention (ECA) to strengthen the feature extraction capability of each task. Experimental results show that our method improves the SROCC by 0.1112 compared with conventional methods. The method can also help find the best aesthetic path planning for unmanned aerial vehicles (UAVs) and vehicles.
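A sketch of an Efficient Channel Attention (ECA) block as it is commonly implemented: global average pooling, a 1-D convolution across channels, and a sigmoid gate. The kernel size is fixed here for simplicity, whereas the original ECA chooses it adaptively from the channel count; this is not necessarily the exact variant used in AMD-CR.

```python
import torch
import torch.nn as nn

class ECA(nn.Module):
    """Efficient Channel Attention: channel-wise gating via a cheap 1-D convolution."""
    def __init__(self, kernel_size=3):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.conv = nn.Conv1d(1, 1, kernel_size, padding=kernel_size // 2, bias=False)

    def forward(self, x):                                   # x: (batch, channels, H, W)
        y = self.pool(x)                                    # (batch, channels, 1, 1)
        y = self.conv(y.squeeze(-1).transpose(1, 2))        # 1-D conv over the channel axis
        y = torch.sigmoid(y.transpose(1, 2).unsqueeze(-1))  # channel attention weights
        return x * y                                        # rescale input channels
```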